    Nightside condensation of iron in an ultra-hot giant exoplanet

    Ultra-hot giant exoplanets receive thousands of times Earth's insolation. Their high-temperature atmospheres (>2,000 K) are ideal laboratories for studying extreme planetary climates and chemistry. Daysides are predicted to be cloud-free, dominated by atomic species, and substantially hotter than nightsides. Atoms are expected to recombine into molecules over the nightside, resulting in different day-night chemistry. While metallic elements and a large temperature contrast have been observed, no chemical gradient has yet been measured across the surface of such an exoplanet. Different atmospheric chemistry between the day-to-night ("evening") and night-to-day ("morning") terminators could, however, be revealed as an asymmetric absorption signature during transit. Here, we report the detection of an asymmetric atmospheric signature in the ultra-hot exoplanet WASP-76b. We spectrally and temporally resolve this signature thanks to the combination of high-dispersion spectroscopy with a large photon-collecting area. The absorption signal, attributed to neutral iron, is blueshifted by −11 ± 0.7 km s⁻¹ on the trailing limb, which can be explained by a combination of planetary rotation and a wind blowing from the hot dayside. In contrast, no signal arises from the nightside close to the morning terminator, showing that atomic iron is not absorbing starlight there. Iron must thus condense during its journey across the nightside.
    Comment: Published in Nature (accepted 24 January 2020). 33 pages, 11 figures, 3 tables.
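
    The size of the blueshift admits a quick kinematic plausibility check. Assuming tidal locking and nominal WASP-76b parameters (radius ≈ 1.83 Jupiter radii, period ≈ 1.81 days; values assumed here, not quoted from the abstract), the rotational contribution alone is:

        import math

        # Nominal WASP-76b parameters (assumed for illustration)
        R_JUP_KM = 71_492.0            # Jupiter's equatorial radius, km
        radius_km = 1.83 * R_JUP_KM    # planetary radius
        period_s = 1.81 * 86_400.0     # rotation period (= orbital period if tidally locked)

        # Equatorial rotation speed; the trailing limb rotates toward the observer
        v_rot = 2.0 * math.pi * radius_km / period_s
        print(f"equatorial rotation speed ≈ {v_rot:.1f} km/s")  # ≈ 5.3 km/s

    Rotation thus accounts for roughly 5 km/s of the measured −11 km/s; the abstract attributes the remainder to a wind blowing from the hot dayside.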

    Margin-based algorithms for information filtering

    In this work, we study an information filtering model where the relevance labels associated with a sequence of feature vectors are realizations of an unknown probabilistic linear function. Building on the analysis of a restricted version of our model, we derive a general filtering rule based on the margin of a ridge regression estimator. While our rule may observe the label of a vector only by classifying the vector as relevant, experiments on a real-world document filtering problem show that its performance is close to that of an on-line classifier allowed to observe all labels. These empirical results are complemented by a theoretical analysis in which we consider a randomized variant of our rule and prove that its expected number of mistakes is never much larger than that of the optimal filtering rule, which knows the hidden linear model.
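
    As a rough illustration of the kind of rule described, here is a minimal numpy sketch of margin-based filtering with a ridge regression estimator; labels are assumed in {-1, +1} with +1 meaning relevant, and all names are illustrative rather than the authors' code:

        import numpy as np

        def ridge_margin_filter(stream, alpha=1.0):
            """Filter a stream of (x, y) pairs, x a numpy vector, y in {-1, +1}.
            A vector is classified as relevant when the ridge-regression margin
            is non-negative, and its label y is read only in that case."""
            d = len(stream[0][0])
            A = alpha * np.eye(d)        # regularized correlation matrix
            b = np.zeros(d)              # label-weighted sum of observed vectors
            predictions = []
            for x, y in stream:
                margin = x @ np.linalg.solve(A, b)
                if margin >= 0:          # classified relevant: label observed
                    A += np.outer(x, x)
                    b += y * x
                predictions.append(margin >= 0)
            return predictions

    The key constraint of the model is visible in the loop: the estimator is updated only on vectors the rule chooses to show, since only those labels are ever revealed.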

    A second-order perceptron algorithm

    Kernel-based linear-threshold algorithms, such as support vector machines and Perceptron-like algorithms, are among the best available techniques for solving pattern classification problems. In this paper, we describe an extension of the classical Perceptron algorithm, called the second-order Perceptron, and analyze its performance within the mistake bound model of on-line learning. The bound achieved by our algorithm depends on the sensitivity to second-order data information and is the best known mistake bound for (efficient) kernel-based linear-threshold classifiers to date. This mistake bound, which strictly generalizes the well-known Perceptron bound, is expressed in terms of the eigenvalues of the empirical data correlation matrix and depends on a parameter controlling the sensitivity of the algorithm to the distribution of these eigenvalues. Since the optimal setting of this parameter is not known a priori, we also analyze two variants of the second-order Perceptron algorithm: one that adaptively sets the value of the parameter in terms of the number of mistakes made so far, and one that is parameterless, based on pseudoinverses.
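
    A compact numpy sketch of the mistake-driven loop just described (the data interface is illustrative; this is the basic algorithm with fixed parameter a, not the adaptive or pseudoinverse variants):

        import numpy as np

        def second_order_perceptron(X, y, a=1.0):
            """Second-order Perceptron on (X, y), X an (n, d) array, y in {-1, +1}.
            Predicts with the margin v' (a*I + S*S' + x*x')^{-1} x and updates
            only on mistakes."""
            n, d = X.shape
            M = a * np.eye(d)        # a*I plus outer products of mistaken instances
            v = np.zeros(d)          # label-weighted sum of mistaken instances
            mistakes = 0
            for x, label in zip(X, y):
                # The current instance is folded into the matrix before predicting
                margin = v @ np.linalg.solve(M + np.outer(x, x), x)
                pred = 1.0 if margin >= 0 else -1.0
                if pred != label:    # mistake: store the instance
                    M += np.outer(x, x)
                    v += label * x
                    mistakes += 1
            return mistakes

    The two variants mentioned above change only how a is set (or avoided via pseudoinverses); the mistake-driven structure is unchanged.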

    On the generalization ability of on-line learning algorithms

    In this paper we show that on-line algorithms for classification and regression can be naturally used to obtain hypotheses with good data-dependent tail bounds on their risk. Our results are proven without requiring complicated concentration-of-measure arguments and hold for arbitrary on-line learning algorithms. Furthermore, when applied to concrete on-line algorithms, our results yield tail bounds that in many cases are comparable to, or better than, the best known bounds.
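
    The flavor of such an online-to-batch conversion can be sketched as follows: run an on-line learner (a Perceptron here, as an assumed example) once over the sample, record the hypotheses it produces, and select the one minimizing its error on the unseen suffix plus a confidence penalty. The penalty term below is an illustrative simplification, not the paper's exact bound:

        import math
        import numpy as np

        def online_to_batch(X, y, delta=0.05):
            """Run a Perceptron over (X, y), X an (n, d) array, y in {-1, +1};
            return the recorded hypothesis minimizing penalized suffix error."""
            n, d = X.shape
            w = np.zeros(d)
            hyps = [w.copy()]                    # hypothesis held before each example
            for x, label in zip(X, y):
                if label * (w @ x) <= 0:         # mistake-driven Perceptron update
                    w = w + label * x
                hyps.append(w.copy())
            best, best_score = hyps[-1], math.inf
            for i, h in enumerate(hyps[:-1]):
                err = np.mean(y[i:] * (X[i:] @ h) <= 0)   # error on the unseen suffix
                score = err + math.sqrt(math.log(n / delta) / (2 * (n - i)))
                if score < best_score:
                    best, best_score = h, score
            return best

    Because hypothesis i is evaluated only on examples it never trained on, the suffix errors are honest estimates, which is what lets the on-line mistake sequence be turned into a risk tail bound.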